
    Capacity Bounds for a Class of Diamond Networks

    A class of diamond networks is studied where the broadcast component is modelled by two independent bit-pipes. New upper and lower bounds are derived on the capacity, improving previous bounds. The upper bound is in the form of a max-min problem, where the maximization is over a coding distribution and the minimization is over an auxiliary channel. The proof technique generalizes bounding techniques of Ozarow for the Gaussian multiple description problem (1981) and of Kang and Liu for the Gaussian diamond network (2011). The bounds are evaluated for a Gaussian multiple access channel (MAC) and for the binary adder MAC, and the capacity is found for interesting ranges of the bit-pipe capacities.
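
    Schematically, the upper bound's max-min form can be written as below; the functional g and the exact constraint set on the auxiliary channel are specified in the paper, so the symbols here are illustrative only:

        \overline{C} \;=\; \max_{P_X} \; \min_{P_{\tilde{Y} \mid X}} \; g\big(P_X,\, P_{\tilde{Y} \mid X}\big)

    By the generic max-min property, evaluating the inner minimum at any fixed coding distribution P_X lower-bounds the expression, while fixing any auxiliary channel P_{\tilde{Y}|X} and maximizing over P_X upper-bounds it, so the bound can be approached numerically from both sides.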

    Benefits of Cache Assignment on Degraded Broadcast Channels

    Degraded K-user broadcast channels (BCs) are studied when the receivers are equipped with cache memories. Lower and upper bounds are derived on the capacity-memory tradeoff, i.e., on the largest rate of reliable communication over the BC as a function of the receivers' cache sizes, and the bounds are shown to match for interesting special cases. The lower bounds are achieved by two new coding schemes that benefit from nonuniform cache assignments. Lower and upper bounds are also established on the global capacity-memory tradeoff, i.e., on the largest capacity-memory tradeoff that can be attained by optimizing the receivers' cache sizes subject to a total cache memory budget. The bounds coincide when the total cache memory budget is sufficiently small or sufficiently large, where the thresholds depend on the BC statistics. For small cache memories, it is optimal to assign all the cache memory to the weakest receiver. In this regime, the global capacity-memory tradeoff grows as the total cache memory budget divided by the number of files in the system. In other words, a perfect global caching gain is achievable in this regime, and the performance corresponds to a system where all the cache contents in the network are available to all receivers. For large cache memories, it is optimal to assign a positive cache memory to every receiver, such that weaker receivers are assigned larger cache memories than stronger receivers. In this regime, the growth rate of the global capacity-memory tradeoff is further divided by the number of users, which corresponds to a local caching gain. It is observed numerically that a uniform assignment of the total cache memory is suboptimal in all regimes, unless the BC is completely symmetric. For erasure BCs, this claim is proved analytically in the regime of small cache sizes.
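
    The two growth regimes described above can be summarized as follows, writing C*(M) for the global capacity-memory tradeoff, M for the total cache budget, N for the number of files, and K for the number of users (symbols chosen here for illustration):

        \frac{\partial C^{\star}(M)}{\partial M} \approx \frac{1}{N} \;\; \text{(small } M \text{: global caching gain)}, \qquad \frac{\partial C^{\star}(M)}{\partial M} \approx \frac{1}{KN} \;\; \text{(large } M \text{: local caching gain)}

    For instance, with N = 100 files and K = 4 receivers, one extra unit of total cache memory buys roughly 1/100 of a unit of rate when caches are small, but only roughly 1/400 once caches are large.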

    Real-time Sampling and Estimation on Random Access Channels: Age of Information and Beyond

    Efficient sampling and remote estimation are critical for a plethora of wireless-empowered applications in the Internet of Things and cyber-physical systems. Motivated by such applications, this work proposes decentralized policies for the real-time monitoring and estimation of autoregressive processes over random access channels. Two classes of policies are investigated: (i) oblivious schemes, in which sampling and transmission policies are independent of the processes being monitored, and (ii) non-oblivious schemes, in which transmitters causally observe their corresponding processes for decision making. In the class of oblivious policies, we show that minimizing the expected time-average estimation error is equivalent to minimizing the expected age of information. Consequently, we prove lower and upper bounds on the minimum achievable estimation error in this class. Next, we consider non-oblivious policies and design a threshold policy, called error-based thinning, in which each source node becomes active if its instantaneous error has crossed a fixed threshold (which we optimize). Active nodes then transmit stochastically following a slotted ALOHA policy. A closed-form, approximately optimal solution is found for the threshold as well as the resulting estimation error. It is shown that non-oblivious policies offer a multiplicative gain close to 3 compared to oblivious policies. Moreover, it is shown that oblivious policies that use the age of information for decision making improve on the state of the art by at least a multiplicative factor of 2. The performance of all discussed policies is compared using simulations. The numerical comparison shows that the performance of the proposed decentralized policy is very close to that of centralized greedy scheduling.
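
    The error-based thinning policy lends itself to a compact simulation. The following is a minimal Python sketch under assumed toy parameters; the AR(1) coefficient rho, threshold beta, and ALOHA transmit probability p_tx are illustrative values, not the paper's optimized ones:

        # Error-based thinning over a collision channel: each node tracks an
        # AR(1) process, becomes active when its instantaneous squared error
        # exceeds a threshold, and active nodes transmit per slotted ALOHA.
        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 50, 10_000                  # number of source nodes, time slots
        rho, beta, p_tx = 0.95, 2.0, 0.1   # AR(1) coefficient, threshold, tx prob.

        x = np.zeros(N)                    # true source states
        x_hat = np.zeros(N)                # receiver-side estimates
        err_sum = 0.0

        for t in range(T):
            x = rho * x + rng.standard_normal(N)   # sources evolve
            x_hat = rho * x_hat                    # receiver's one-step prediction
            err = (x - x_hat) ** 2                 # instantaneous squared errors
            active = err > beta                    # error-based thinning
            tx = active & (rng.random(N) < p_tx)   # slotted-ALOHA transmissions
            if tx.sum() == 1:                      # success only without collision
                i = np.flatnonzero(tx)[0]
                x_hat[i] = x[i]                    # fresh sample reaches receiver
            err_sum += ((x - x_hat) ** 2).mean()

        print("time-average estimation error:", err_sum / T)

    Sweeping beta in such a simulation is a simple way to reproduce the threshold-versus-error tradeoff the abstract refers to.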

    Neural Estimation of the Rate-Distortion Function With Applications to Operational Source Coding

    A fundamental question in designing lossy data compression schemes is how well one can do in comparison with the rate-distortion function, which describes the known theoretical limits of lossy compression. Motivated by the empirical success of deep neural network (DNN) compressors on large, real-world data, we investigate methods to estimate the rate-distortion function on such data, which would allow DNN compressors to be compared against the optimal performance. While one could use the empirical distribution of the data and apply the Blahut-Arimoto algorithm, this approach presents several computational challenges and inaccuracies when the datasets are large and high-dimensional, as is the case for modern image datasets. Instead, we reformulate the rate-distortion objective and solve the resulting functional optimization problem using neural networks. We apply the resulting rate-distortion estimator, called NERD, to popular image datasets, and provide evidence that NERD can accurately estimate the rate-distortion function. Using our estimate, we show that the rate-distortion achievable by DNN compressors is within several bits of the rate-distortion function for real-world datasets. Additionally, NERD provides access to the rate-distortion-achieving channel, as well as samples from its output marginal. Therefore, using recent results in reverse channel coding, we describe how NERD can be used to construct an operational one-shot lossy compression scheme with guarantees on the achievable rate and distortion. Experimental results demonstrate competitive performance with DNN compressors.
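
    For contrast with the neural estimator, the Blahut-Arimoto baseline the abstract mentions is easy to state in code. A minimal Python sketch on a toy discrete source follows; the binary source, Hamming distortion, and the multiplier beta (which traces out the R(D) curve) are illustrative choices:

        # Classical Blahut-Arimoto iteration for a discrete source: alternates
        # between the optimal test channel for a fixed output marginal and the
        # output marginal induced by that channel.
        import numpy as np

        def blahut_arimoto(p, d, beta, n_iter=200):
            """Return a (rate, distortion) point on the R(D) curve at multiplier beta."""
            n, m = d.shape
            q = np.full(m, 1.0 / m)                  # output marginal, init uniform
            for _ in range(n_iter):
                w = q[None, :] * np.exp(-beta * d)   # unnormalized p(y|x)
                w /= w.sum(axis=1, keepdims=True)    # optimal conditional given q
                q = p @ w                            # induced output marginal
            rate = np.sum(p[:, None] * w * np.log2(w / q[None, :]))  # I(X;Y) in bits
            dist = np.sum(p[:, None] * w * d)                        # E[d(X,Y)]
            return rate, dist

        p = np.array([0.5, 0.5])      # uniform binary source
        d = 1.0 - np.eye(2)           # Hamming distortion
        print(blahut_arimoto(p, d, beta=4.0))

    On this toy source the iteration recovers points on R(D) = 1 - h_b(D); the computational difficulty the abstract points to arises when the alphabet is an entire image space, which is what motivates NERD.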